Taiqiang Wu

BPDQ: Bit-Plane Decomposition Quantization on a Variable Grid for Large Language Models
Feb 04, 2026

LINA: Linear Autoregressive Image Generative Models with Continuous Tokens
Jan 30, 2026

ProFit: Leveraging High-Value Signals in SFT via Probability-Guided Token Selection
Jan 14, 2026

MMFormalizer: Multimodal Autoformalization in the Wild
Jan 06, 2026

QuadINR: Hardware-Efficient Implicit Neural Representations Through Quadratic Activation
Aug 20, 2025

SwingArena: Competitive Programming Arena for Long-context GitHub Issue Solving
May 29, 2025

PhyX: Does Your Model Have the "Wits" for Physical Reasoning?
May 21, 2025

Shadow-FT: Tuning Instruct via Base
May 19, 2025

HaLoRA: Hardware-aware Low-Rank Adaptation for Large Language Models Based on Hybrid Compute-in-Memory Architecture
Feb 27, 2025

LLM-Neo: Parameter Efficient Knowledge Distillation for Large Language Models
Nov 11, 2024